Access: subscription full text 234 articles; free 9 articles.
Subject: Education 219; Scientific research 12; Cultures of various countries 2; Sport 6; Cultural theory 1; Information dissemination 3.
Year: 2023 (3), 2022 (1), 2021 (4), 2020 (10), 2019 (6), 2018 (13), 2017 (17), 2016 (9), 2015 (16), 2014 (11), 2013 (40), 2012 (13), 2011 (15), 2010 (5), 2009 (7), 2008 (3), 2007 (10), 2006 (10), 2005 (6), 2004 (7), 2003 (3), 2002 (7), 2001 (2), 2000 (5), 1999 (1), 1998 (2), 1997 (4), 1996 (1), 1995 (1), 1994 (2), 1993 (3), 1992 (1), 1989 (1), 1988 (1), 1985 (1), 1983 (1), 1979 (1).
243 results in total (search time: 937 ms).
41.
In test assembly, a fundamental difference exists between algorithms that select the items for a test sequentially and those that select them simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing, but it leads to the non-trivial problem of how to realize a set of content constraints on the test, a problem more naturally solved by a simultaneous item-selection method. Three main item-selection methods in adaptive testing offer solutions to this dilemma. The spiraling method moves item selection across the categories of items in the pool in proportion to the numbers needed from them. Item selection by the weighted-deviations method (WDM) and the shadow test approach (STA) is based on projections of the future consequences of selecting an item; the two methods differ in that the former projects a weighted sum of the attributes of the eventual test, whereas the latter projects the test itself. The pros and cons of these methods are analyzed. An empirical comparison between the WDM and the STA for an adaptive version of the Law School Admission Test (LSAT) showed equally good item-exposure rates, but violations of some of the constraints and larger bias and inaccuracy of the ability estimator for the WDM.
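As a rough illustration of the projection idea behind the WDM, the sketch below scores each candidate item by its information at the current ability estimate minus a weighted sum of projected constraint shortfalls. All names (item_info, attributes, targets, weights) and the proportional projection of the remaining selections are assumptions made for illustration; this is not the operational LSAT procedure.

```python
import numpy as np

def wdm_select(item_info, attributes, targets, weights, selected, test_length):
    """Minimal sketch of weighted-deviations item selection (illustrative only).

    item_info  : (n_items,) Fisher information at the current ability estimate
    attributes : (n_items, n_categories) binary content-attribute matrix
    targets    : (n_categories,) desired number of items per category
    weights    : (n_categories,) penalty weights for projected shortfalls
    selected   : list of indices of items already administered
    """
    remaining = test_length - (len(selected) + 1)
    pool_rates = attributes.mean(axis=0)             # expected attributes of a future item
    best_item, best_score = None, -np.inf
    for i in range(len(item_info)):
        if i in selected:
            continue
        counts = attributes[selected + [i]].sum(axis=0)
        projected = counts + remaining * pool_rates  # projected category counts at test end
        shortfall = np.maximum(targets - projected, 0)  # only lower bounds penalized here
        score = item_info[i] - weights @ shortfall
        if score > best_score:
            best_item, best_score = i, score
    return best_item
```

The STA, by contrast, re-solves a full-length shadow test satisfying all constraints at each step (typically with a mixed-integer solver) and administers the best unused item from that solution.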
42.
Item-pool management requires a balancing act between the input of new items into the pool and the output of tests assembled from it. A strategy for optimizing item-pool management is presented that is based on periodically updating an optimal blueprint for the item pool so that item production is tuned to test assembly. A simulation study with scenarios involving different levels of quality of the initial item pool, item writing, and pool management, for a previous item pool of the Law School Admission Test (LSAT), showed that good item-pool management had about the same main effects on item-writing costs and the number of feasible tests as good item writing, but the two factors showed strong interaction effects.
43.
An increasing number of students with dyslexia enter higher education. As a result, there is a growing need for standardized diagnosis. Previous research has suggested that a small number of tests may suffice to reliably assess students with dyslexia, but these studies were based on post hoc discriminant analysis, which tends to overestimate the percentage of systematic variance, and were limited to the English language (and the Anglo-Saxon education system). We therefore repeated the research in a non-English language (Dutch) and selected variables on the basis of a prediction analysis. The results of our study confirm that it is not necessary to administer a wide range of tests to diagnose dyslexia in (young) adults. Three tests sufficed: word reading, word spelling, and phonological awareness, in line with the proposal that higher education students with dyslexia continue to have specific problems with reading and writing. We also show that a traditional postdiction analysis selects more variables of importance than the prediction analysis; however, these extra variables explain study-specific variance and do not increase the predictive power of the model.
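To see the methodological contrast in miniature, the hedged sketch below compares in-sample ("postdiction") accuracy with cross-validated predictive accuracy for a chosen subset of tests. The scikit-learn setup, the variable names, and the use of logistic regression are assumptions made for illustration and are not the analysis reported in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def in_sample_accuracy(X, y, cols):
    """Accuracy on the training data itself; tends to favour adding
    study-specific predictors (the 'postdiction' perspective)."""
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    return model.score(X[:, cols], y)

def predictive_accuracy(X, y, cols):
    """Mean 5-fold cross-validated accuracy; rewards only predictors
    that generalise beyond the sample (the prediction perspective)."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, cols], y, cv=5).mean()

# Hypothetical usage: columns 0-2 hold word reading, word spelling and
# phonological awareness; the remaining columns hold the rest of the battery.
# core = [0, 1, 2]; full = list(range(X.shape[1]))
# print(in_sample_accuracy(X, y, full) - in_sample_accuracy(X, y, core))
# print(predictive_accuracy(X, y, full) - predictive_accuracy(X, y, core))
```

Under this kind of comparison, extra tests that only capture study-specific variance raise the in-sample gap but not the cross-validated one.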
44.
Ignoring a level can have a substantial impact on the conclusions of a multilevel analysis. For intercept-only models and balanced data, we derive these effects analytically; for more complex random-intercept models or unbalanced data, a simulation study is performed. The most important effects concern the estimates and corresponding standard errors of the variance parameters at the levels adjacent to the ignored level and of the coefficients of the predictors at the ignored and bordering levels. We therefore conclude that if the researcher is interested in a specific level, he or she should account for both the adjacent upper and lower levels. The conclusions are illustrated using empirical data from educational research.
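A small simulation along these lines can be sketched as follows; the variance values, grouping structure, and the statsmodels-based fitting are assumptions chosen for illustration, not the authors' actual study design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate balanced three-level data (pupils within classes within schools)
# and compare variance estimates from a model that includes the class level
# with one that ignores it.
rng = np.random.default_rng(1)
n_school, n_class, n_pupil = 30, 4, 15
rows = []
for s in range(n_school):
    u_school = rng.normal(0.0, np.sqrt(0.3))
    for c in range(n_class):
        u_class = rng.normal(0.0, np.sqrt(0.2))
        for _ in range(n_pupil):
            rows.append({"school": s, "cls": f"{s}-{c}",
                         "y": u_school + u_class + rng.normal(0.0, np.sqrt(0.5))})
df = pd.DataFrame(rows)

# Intercept-only model with both upper levels (classes nested in schools).
full = smf.mixedlm("y ~ 1", df, groups="school",
                   vc_formula={"cls": "0 + C(cls)"}).fit()

# Same data, but the class level is ignored.
reduced = smf.mixedlm("y ~ 1", df, groups="school").fit()

print(full.cov_re, full.vcomp, full.scale)   # school, class, residual variances
print(reduced.cov_re, reduced.scale)         # class variance redistributed to adjacent levels
```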
45.
The effects of teachers' group incentives on student achievement are examined by reviewing theoretical arguments and empirical studies published between 1990 and 2011. Studies from developing countries reported positive effects of group incentives on student test scores, whereas experimental studies from developed countries reported insignificant effects. Some of the evidence suggests a positive association between small teacher group size and the effectiveness of group incentives. Still, it is uncertain whether the key to successful group incentives in teaching lies in the incentive size, the teacher group size, teachers' intrinsic motivation, or the type of incentive (rank type vs. non-rank type). Furthermore, most studies show that individual teacher incentives have positive effects, unlike studies on group incentives; however, comparative studies of group and individual incentives are lacking. We conclude that the current empirical evidence has unclear policy implications and recommend additional experimental research.
46.
In spite of all the technical progress in observed-score equating, several of the more conceptual aspects of the process are still not well understood. As a result, the equating literature struggles with rather complex criteria of equating, a lack of test-theoretic foundation, confusing terminology, and ad hoc analyses. A return to Lord's foundational criterion of equity of equating, a derivation of the true equating transformation from it, and mainstream statistical treatment of the problem of estimating this transformation for various data-collection designs offer a solution to the problem.
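For context, the "true equating transformation" referred to here is usually written as an equipercentile-type mapping between conditional observed-score distributions. The notation below is a sketch under the assumption that F_{X|θ} and F_{Y|θ} denote the observed-score distributions of the new and old test form for an examinee of ability θ.

```latex
% Equipercentile-type transformation, conditional on ability \theta (sketch):
\varphi_{\theta}(x) \;=\; F_{Y\mid\theta}^{-1}\bigl(F_{X\mid\theta}(x)\bigr)
```

Lord's equity criterion requires the equated score to have the same conditional distribution as the score on the old form at every θ, which is why the transformation is defined conditionally rather than on the marginal score distributions.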
47.
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated four approaches to correct for this bias: adjusting the standardized effect sizes with Hedges' small-sample bias correction; estimating the within-subject standard deviation with a two-level model per study; estimating it with a regression model in which the subjects are identified by dummy predictor variables; and correcting the effect sizes with an iterative raw-data parametric bootstrap procedure. The results indicate that the first and last approaches succeed in reducing the bias of the fixed-effects estimates. Given the difference in complexity, we recommend the first approach.
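The first approach rests on the standard small-sample correction factor. A sketch of the commonly used approximation, with m denoting the degrees of freedom of the standardizer (notation assumed here, not taken from the paper), is:

```latex
% Hedges' small-sample correction (common approximation):
g \;=\; \Bigl(1 - \frac{3}{4m - 1}\Bigr)\, d
```

where d is the raw standardized effect size and g its bias-corrected version; the correction shrinks d more strongly when the number of measurement occasions, and hence m, is small.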
48.
49.
Recent studies have shown that the interpretation of graphs is not always easy for students. In order to reason properly about distributions of data, however, one needs to be able to interpret graphical representations of these distributions correctly. In this study, we used Tversky's principles for the design of graphs to explain how 125 first-year university students interpreted histograms and box plots. We systematically varied the representation that accompanied the tasks between students to identify how the design principles affected their reasoning. Many students misinterpreted histograms and box plots, even though they had the required knowledge and time to interpret the representations correctly. We argue that the combination of dual-process theories and Tversky's design principles provides a promising theoretical framework that opens up various possibilities for future research.
50.
The literature shows that feedback that is specific, immediate, and goal-oriented improves (pre-service) teachers' performance, and synchronous coaching provides this kind of feedback. Because the feedback is immediate, however, pre-service teachers can suffer from cognitive load. We propose a set of standardised keywords through which this performance feedback can be delivered, with each keyword acting as a summary of the feedback message. The construction and selection of the keywords aim to reduce message ambiguity while keeping the cognitive load on the pre-service teacher low. An in vivo pilot study with 40 respondents (pre-service teachers and their coaches) supported our hypothesis that using such sets of standardised keywords mitigates ambiguity and cognitive load. These findings and other considerations for further research on immediate performance feedback are discussed.